Python AI Agent Frameworks Guide
Table of Contents
- Overview
- Framework Comparison Matrix
- Strands Agents
- LangGraph
- CrewAI
- LangChain
- AutoGen
- AWS Bedrock Integration
- Choosing the Right Framework
- Implementation Examples
Overview
AI agent frameworks provide the orchestration layer for building intelligent systems that can reason, plan, and execute tasks autonomously. This guide covers the major Python frameworks and their integration with AWS services, particularly Amazon Bedrock.
What is an AI Agent?
An AI agent is an autonomous system that:
- Perceives its environment through inputs
- Reasons about goals and plans
- Takes actions using tools and APIs
- Learns from feedback and outcomes
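A minimal loop makes those four steps concrete. In this sketch, `llm_decide` and the tool callables are hypothetical placeholders for whatever model call and tools you wire in:

# Illustrative agent loop; llm_decide is a hypothetical stand-in for your model call
def run_agent(goal, llm_decide, tools, max_steps=10):
    observations = [f"Goal: {goal}"]                   # perceive: initial input
    for _ in range(max_steps):
        decision = llm_decide(observations)            # reason: plan the next step
        if decision["action"] == "finish":
            return decision["answer"]
        tool = tools[decision["tool"]]                 # act: invoke a tool
        result = tool(**decision.get("args", {}))
        observations.append(f"Observation: {result}")  # learn from the outcome
    return "Stopped after max_steps"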
Why Use a Framework?
Without Framework:
├── Manual prompt engineering
├── Custom tool integration
├── State management from scratch
├── Error handling and retries
└── Observability and logging
With Framework:
├── ✅ Pre-built agent patterns
├── ✅ Tool integration abstractions
├── ✅ State management built-in
├── ✅ Automatic error handling
└── ✅ Observability and tracing
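To see what the left-hand column costs in practice, here is a hedged sketch of the hand-rolled tool loop against the Bedrock Converse API that a framework would otherwise manage for you (tool schemas, retries, and error handling omitted):

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def manual_agent(messages, tool_config, tools):
    """Hand-rolled tool-use loop; a framework replaces all of this."""
    while True:
        response = client.converse(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",
            messages=messages,
            toolConfig=tool_config,
        )
        messages.append(response["output"]["message"])
        if response["stopReason"] != "tool_use":
            return response["output"]["message"]  # model is done
        # Find tool requests, run them, and feed results back
        for block in response["output"]["message"]["content"]:
            if "toolUse" in block:
                use = block["toolUse"]
                result = tools[use["name"]](**use["input"])
                messages.append({
                    "role": "user",
                    "content": [{"toolResult": {
                        "toolUseId": use["toolUseId"],
                        "content": [{"text": str(result)}],
                    }}],
                })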
Framework Comparison Matrix
| Framework | Developer | Philosophy | Complexity | AWS Integration | Best For |
|---|---|---|---|---|---|
| Strands | AWS | Model-driven, minimal scaffolding | Low | Native (Bedrock-first) | AWS-native apps, quick prototypes |
| LangGraph | LangChain | Stateful graph workflows | Medium-High | Excellent | Complex workflows, state machines |
| CrewAI | CrewAI Inc | Role-based teams | Low-Medium | Good | Multi-agent collaboration |
| LangChain | LangChain | Composable chains | Medium | Excellent | RAG, chains, general purpose |
| AutoGen | Microsoft | Conversational agents | Medium | Good | Research, multi-agent conversations |
Key Characteristics
Architecture Patterns:
┌─────────────────────────────────────────────────────────────┐
│ FRAMEWORK PATTERNS │
├─────────────────────────────────────────────────────────────┤
│ │
│ STRANDS: Model-Driven │
│ ┌──────────┐ │
│ │ LLM │ ← Drives everything │
│ └────┬─────┘ │
│ ├─→ Plans │
│ ├─→ Executes Tools │
│ └─→ Reasons │
│ │
│ LANGGRAPH: State Machine │
│ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Node1│───→│Node2│───→│Node3│ │
│ └─────┘ └─────┘ └─────┘ │
│ ↑ │ │ │
│ └──────────┴──────────┘ │
│ │
│ CREWAI: Role-Based Teams │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │Researcher│→ │ Analyst │→ │ Writer │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ LANGCHAIN: Sequential Chains │
│ Input → Chain1 → Chain2 → Chain3 → Output │
│ │
│ AUTOGEN: Conversational │
│ ┌──────┐ ┌──────┐ │
│ │Agent1│ ←──→│Agent2│ │
│ └──────┘ └──────┘ │
│ ↕ ↕ │
│ ┌──────┐ ┌──────┐ │
│ │Agent3│ ←──→│Agent4│ │
│ └──────┘ └──────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Strands Agents
Developer: AWS
License: Apache 2.0
GitHub: strands-agents/sdk-python
Philosophy
Strands takes a model-driven approach: trust the LLM to handle planning, reasoning, and tool execution with minimal scaffolding. The framework stays out of your way and lets the model do the heavy lifting.
Core Characteristics
Strengths:
- ✅ Minimal code - agents in ~10 lines
- ✅ AWS-native with Bedrock-first design
- ✅ Model-agnostic (Bedrock, OpenAI, Anthropic, local models)
- ✅ MCP (Model Context Protocol) tool integration
- ✅ Streaming support built-in
- ✅ Production-ready observability and tracing
- ✅ Lightweight with minimal dependencies

Limitations:
- ❌ Newer framework (fewer community resources)
- ❌ Less control over agent workflow
- ❌ Simpler patterns (not for complex state machines)
Architecture
# Strands Agent Architecture
# (import path follows this guide's examples; check the SDK docs
# for the exact module layout in your installed version)
from strands_agents import Agent, Tool

# 1. Define tools as plain Python callables
def search_function(query: str) -> str:
    """Placeholder search backend."""
    return f"Results for: {query}"

def calc_function(expression: str) -> str:
    """Placeholder calculator (illustrative only; avoid eval in production)."""
    return str(eval(expression))

tools = [
    Tool(name="search", function=search_function),
    Tool(name="calculator", function=calc_function)
]

# 2. Create agent (model drives everything)
agent = Agent(
    model="bedrock/anthropic.claude-3-sonnet",
    system_prompt="You are a helpful assistant",
    tools=tools
)

# 3. Run - model decides what to do
response = agent.run("What's 25 * 4 and search for Python tutorials")

# Model automatically:
# - Plans the approach
# - Calls calculator tool
# - Calls search tool
# - Synthesizes response
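The published Strands SDK also ships a decorator-based tool API; a minimal sketch, assuming the `strands` module path and `@tool` decorator from the open-source SDK (verify against the version you install):

from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count words in a text."""
    return len(text.split())

# The decorated function's signature and docstring become the tool schema
agent = Agent(tools=[word_count])
response = agent("How many words are in this sentence?")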
AWS Bedrock Integration
Native Bedrock Support:
from strands_agents import Agent
import boto3

# Direct Bedrock integration
agent = Agent(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    region="us-east-1",
    tools=[...]
)

# Or with a custom boto3 client
bedrock_client = boto3.client('bedrock-runtime', region_name='us-west-2')
agent = Agent(
    model="bedrock/anthropic.claude-3-haiku",
    client=bedrock_client,
    tools=[...]
)
Use Cases
Best for:
- AWS-native applications
- Quick prototypes and MVPs
- Simple to medium complexity agents
- When you want minimal code
- Bedrock-first deployments

Not ideal for:
- Complex multi-step workflows with branching
- Fine-grained control over execution flow
- When you need explicit state machines
Example: Customer Support Agent
from strands_agents import Agent, Tool

def query_knowledge_base(query: str) -> str:
    """Query Bedrock Knowledge Base."""
    kb_response = ...  # integrate with Bedrock KB (see the retrieve example later)
    return kb_response

def create_ticket(issue: str, priority: str) -> str:
    """Create a support ticket and return its ID."""
    ticket_id = ...  # call your ticketing system here
    return ticket_id

agent = Agent(
    model="bedrock/anthropic.claude-3-sonnet",
    system_prompt="""You are a customer support agent.
    Help users with their questions using the knowledge base.
    Create tickets for issues you cannot resolve.""",
    tools=[
        Tool(name="search_kb", function=query_knowledge_base),
        Tool(name="create_ticket", function=create_ticket)
    ]
)

# Agent automatically decides when to search the KB vs create a ticket
response = agent.run("My order hasn't arrived in 2 weeks")
LangGraph
Developer: LangChain
License: MIT
GitHub: langchain-ai/langgraph
Philosophy
LangGraph treats agent workflows as stateful graphs where nodes represent actions and edges represent transitions. This gives you explicit control over execution flow, state management, and conditional branching.
Core Characteristics
Strengths:
- ✅ Explicit control over workflow logic
- ✅ Built-in state management and persistence
- ✅ Conditional branching and loops
- ✅ Human-in-the-loop support
- ✅ Time-travel debugging
- ✅ Excellent for complex workflows
- ✅ Strong AWS Bedrock integration

Limitations:
- ❌ Steeper learning curve
- ❌ More verbose code
- ❌ Requires understanding of graph concepts
- ❌ Overkill for simple agents
Architecture
# LangGraph Architecture - Stateful Graph
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_aws import ChatBedrock

# 1. Define state
class AgentState(TypedDict):
    messages: list
    next_action: str
    data: dict

# 2. Define nodes (actions)
def research_node(state: AgentState):
    research_results = {"confidence": 0.9}  # perform research here
    return {"data": research_results}

def analyze_node(state: AgentState):
    analysis = dict(state["data"])  # analyze the data here
    return {"data": analysis}

def finalize_node(state: AgentState):
    return state  # package the final answer

def decide_next(state: AgentState):
    # Conditional routing
    if state["data"]["confidence"] > 0.8:
        return "finalize"
    return "research"

# 3. Build graph
workflow = StateGraph(AgentState)
workflow.add_node("research", research_node)
workflow.add_node("analyze", analyze_node)
workflow.add_node("finalize", finalize_node)

# 4. Add edges (transitions)
workflow.set_entry_point("research")
workflow.add_edge("research", "analyze")
workflow.add_conditional_edges(
    "analyze",
    decide_next,
    {"research": "research", "finalize": "finalize"}
)
workflow.add_edge("finalize", END)

# 5. Compile and run
app = workflow.compile()
user_input = "Research AI agent frameworks"
result = app.invoke({"messages": [user_input]})
AWS Bedrock Integration
Bedrock Models in LangGraph:
from langgraph.graph import StateGraph
from langchain_aws import ChatBedrock
from langchain_core.messages import HumanMessage

# Initialize Bedrock model
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
    model_kwargs={"temperature": 0.7}
)

# Use in graph nodes
def agent_node(state):
    messages = state["messages"]
    response = llm.invoke(messages)
    return {"messages": messages + [response]}

# Multi-agent with different Bedrock models
researcher_llm = ChatBedrock(model_id="anthropic.claude-3-sonnet")
analyst_llm = ChatBedrock(model_id="anthropic.claude-3-haiku")  # Faster/cheaper

workflow = StateGraph(AgentState)
workflow.add_node("research", lambda s: research_with_llm(s, researcher_llm))
workflow.add_node("analyze", lambda s: analyze_with_llm(s, analyst_llm))
Integration with Bedrock Knowledge Bases:
from langchain_aws import AmazonKnowledgeBasesRetriever

# Bedrock KB as retriever in LangGraph
kb_retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="KB123456",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 5}}
)

def rag_node(state):
    query = state["query"]
    # Retrieve from Bedrock KB
    docs = kb_retriever.get_relevant_documents(query)
    # Generate with Bedrock model
    response = llm.invoke(f"Context: {docs}\n\nQuestion: {query}")
    return {"answer": response}
Use Cases
Best for:
- Complex multi-step workflows
- Workflows with conditional logic
- State persistence requirements (see the checkpointing sketch below)
- Human-in-the-loop applications
- When you need explicit control
- Multi-agent orchestration with routing
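Persistence and human-in-the-loop support both hang off LangGraph's checkpointer. A minimal sketch continuing the graph above with the in-memory checkpointer (swap in a database-backed checkpointer for production; `session-42` is an arbitrary thread ID):

from langgraph.checkpoint.memory import MemorySaver

# Compile the graph with a checkpointer so state survives across invocations
checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# Each thread_id gets its own persisted state
config = {"configurable": {"thread_id": "session-42"}}
result = app.invoke({"messages": [user_input]}, config=config)

Re-invoking with the same thread_id resumes from the saved state, which is also what interrupt-based human-in-the-loop flows rely on.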
Example: Research & Analysis Pipeline
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_aws import ChatBedrock

class ResearchState(TypedDict):
    topic: str
    research_data: list
    analysis: str
    retry_count: int

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet")

def research(state):
    # Research the topic; increment retry_count so retries terminate
    data = perform_research(state["topic"])  # your research function
    return {"research_data": data, "retry_count": state["retry_count"] + 1}

def analyze(state):
    # Analyze research data
    analysis = llm.invoke(f"Analyze: {state['research_data']}")
    return {"analysis": analysis.content}

def quality_check(state):
    # Decide if the analysis is good enough
    if len(state["analysis"]) > 500:
        return "finalize"
    elif state["retry_count"] < 3:
        return "research"  # Retry
    return "finalize"

workflow = StateGraph(ResearchState)
workflow.add_node("research", research)
workflow.add_node("analyze", analyze)
workflow.add_node("finalize", lambda s: s)

workflow.set_entry_point("research")
workflow.add_edge("research", "analyze")
workflow.add_conditional_edges(
    "analyze",
    quality_check,
    {"research": "research", "finalize": "finalize"}
)
workflow.add_edge("finalize", END)

app = workflow.compile()
result = app.invoke({"topic": "AI trends 2025", "retry_count": 0})
CrewAI
Developer: CrewAI Inc
License: MIT
GitHub: joaomdmoura/crewAI
Philosophy
CrewAI uses a role-based team metaphor where you define agents by their role, goal, and backstory, then assemble them into "crews" that collaborate toward shared objectives. Think of it as building a team of specialists.
Core Characteristics
Strengths:
- ✅ Intuitive role-based design
- ✅ Easy to understand and use
- ✅ Great for multi-agent collaboration
- ✅ Built-in task delegation
- ✅ Sequential and hierarchical processes
- ✅ Good documentation and examples
- ✅ AWS Bedrock integration via tools

Limitations:
- ❌ Less flexible than graph-based approaches
- ❌ Opinionated structure
- ❌ Limited control over execution flow
- ❌ Can be verbose for simple tasks
Architecture
# CrewAI Architecture - Role-Based Teams
from crewai import Agent, Task, Crew
from langchain_aws import ChatBedrock

# 1. Define LLM
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet")

# 2. Create agents with roles
# (search_tool, web_scraper, analysis_tool, grammar_check are your own tool objects)
researcher = Agent(
    role="Research Specialist",
    goal="Find accurate information on given topics",
    backstory="Expert researcher with 10 years experience",
    llm=llm,
    tools=[search_tool, web_scraper]
)

analyst = Agent(
    role="Data Analyst",
    goal="Analyze research data and extract insights",
    backstory="Senior analyst specializing in trend analysis",
    llm=llm,
    tools=[analysis_tool]
)

writer = Agent(
    role="Content Writer",
    goal="Create engaging content from analysis",
    backstory="Professional writer with technical expertise",
    llm=llm,
    tools=[grammar_check]
)

# 3. Define tasks
research_task = Task(
    description="Research AI trends in 2025",
    agent=researcher,
    expected_output="Comprehensive research report"
)

analysis_task = Task(
    description="Analyze research findings",
    agent=analyst,
    expected_output="Key insights and trends"
)

writing_task = Task(
    description="Write article based on analysis",
    agent=writer,
    expected_output="Publication-ready article"
)

# 4. Create crew
crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process="sequential"  # or "hierarchical"
)

# 5. Execute
result = crew.kickoff()
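Switching the process to hierarchical adds a manager that delegates and reviews work. A minimal sketch reusing the agents and tasks above, assuming CrewAI's Process enum and manager_llm parameter (check your installed version):

from crewai import Crew, Process

hierarchical_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,  # a manager agent delegates and reviews work
    manager_llm=llm                # hierarchical mode requires a manager model
)
result = hierarchical_crew.kickoff()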
AWS Bedrock Integration
Using Bedrock Models:
from crewai import Agent, Task, Crew
from langchain_aws import ChatBedrock

# Different Bedrock models for different agents
researcher_llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1"
)

writer_llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # Faster for writing
    region_name="us-east-1"
)

researcher = Agent(
    role="Researcher",
    goal="Deep research",
    llm=researcher_llm,
    tools=[...]
)

writer = Agent(
    role="Writer",
    goal="Create content",
    llm=writer_llm,
    tools=[...]
)
Integration with Bedrock Agents:
from crewai_tools import BedrockInvokeAgentTool

# Use a Bedrock Agent as a tool in CrewAI
bedrock_agent_tool = BedrockInvokeAgentTool(
    agent_id="AGENT123",
    agent_alias_id="ALIAS456",
    region_name="us-east-1"
)

# CrewAI agent can invoke the Bedrock Agent
crewai_agent = Agent(
    role="Coordinator",
    goal="Coordinate with Bedrock agents",
    tools=[bedrock_agent_tool],
    llm=ChatBedrock(model_id="anthropic.claude-3-sonnet")
)
Bedrock Knowledge Bases Integration:
from langchain_aws import AmazonKnowledgeBasesRetriever
from crewai_tools import LangChainTool

# Wrap Bedrock KB as a CrewAI tool
# (wrapper name varies across crewai_tools versions; recent CrewAI releases
# can also consume LangChain tools directly in the tools list)
kb_retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="KB123456"
)

kb_tool = LangChainTool(
    name="Company Knowledge Base",
    description="Search company documentation and policies",
    func=lambda q: kb_retriever.get_relevant_documents(q)
)

support_agent = Agent(
    role="Support Specialist",
    goal="Help customers using company knowledge",
    tools=[kb_tool],
    llm=ChatBedrock(model_id="anthropic.claude-3-sonnet")
)
Use Cases
Best for:
- Multi-agent collaboration scenarios
- Role-based task delegation
- Content creation pipelines
- Research and analysis workflows
- When the team metaphor fits naturally
- Regulatory compliance automation
Example: Compliance Review System
from crewai import Agent, Task, Crew
from langchain_aws import ChatBedrock, AmazonKnowledgeBasesRetriever

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet")

# Agent 1: Document Reviewer
# (kb_search_tool, risk_calculator, template_tool are your own tool objects)
reviewer = Agent(
    role="Compliance Reviewer",
    goal="Review documents for regulatory compliance",
    backstory="Expert in financial regulations with 15 years experience",
    llm=llm,
    tools=[kb_search_tool]
)

# Agent 2: Risk Assessor
assessor = Agent(
    role="Risk Assessor",
    goal="Assess compliance risks and severity",
    backstory="Risk management specialist",
    llm=llm,
    tools=[risk_calculator]
)

# Agent 3: Report Generator
reporter = Agent(
    role="Report Generator",
    goal="Generate compliance reports",
    backstory="Technical writer specializing in compliance",
    llm=llm,
    tools=[template_tool]
)

# Tasks (recent CrewAI versions require expected_output on each Task)
review_task = Task(
    description="Review contract for compliance issues",
    agent=reviewer,
    expected_output="List of identified compliance issues"
)

assess_task = Task(
    description="Assess risk level of identified issues",
    agent=assessor,
    expected_output="Risk rating per issue"
)

report_task = Task(
    description="Generate compliance report",
    agent=reporter,
    expected_output="Formatted compliance report"
)

# Crew
compliance_crew = Crew(
    agents=[reviewer, assessor, reporter],
    tasks=[review_task, assess_task, report_task],
    process="sequential"
)

result = compliance_crew.kickoff(inputs={"contract": contract_text})
LangChain
Developer: LangChain
License: MIT
GitHub: langchain-ai/langchain
Philosophy
LangChain provides composable building blocks (chains, agents, tools, memory) that you can combine to build LLM applications. It's the most mature and comprehensive framework with extensive integrations.
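That composability is most visible in the LangChain Expression Language (LCEL), where a prompt, model, and parser pipe into one runnable; a minimal sketch with a Bedrock model:

from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

# prompt | model | parser composes into a single runnable chain
chain = prompt | llm | StrOutputParser()
summary = chain.invoke({"text": "LangChain provides composable building blocks..."})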
Core Characteristics
Strengths:
- ✅ Mature ecosystem with extensive integrations
- ✅ Comprehensive documentation
- ✅ Large community and resources
- ✅ Excellent AWS Bedrock support
- ✅ RAG capabilities built-in
- ✅ Flexible and composable
- ✅ Production-ready features

Limitations:
- ❌ Can be overwhelming for beginners
- ❌ Frequent API changes
- ❌ Sometimes over-engineered for simple tasks
- ❌ Large dependency footprint
Architecture
# LangChain Architecture - Composable Chains
from langchain_aws import ChatBedrock
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

# 1. Initialize LLM
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1"
)

# 2. Define tools
@tool
def search_database(query: str) -> str:
    """Search the company database"""
    db_results = ...  # query your database here
    return db_results

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email"""
    return "Email sent"

tools = [search_database, send_email]

# 3. Create prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

# 4. Create agent
agent = create_tool_calling_agent(llm, tools, prompt)

# 5. Create executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

# 6. Run
result = agent_executor.invoke({"input": "Search for customer X and email them"})
AWS Bedrock Integration
Comprehensive Bedrock Support:
from langchain_aws import ChatBedrock, BedrockEmbeddings
from langchain_aws import AmazonKnowledgeBasesRetriever
from langchain.agents import AgentExecutor

# 1. Chat Models
chat = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"temperature": 0.7, "max_tokens": 4096}
)

# 2. Embeddings
embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v2:0"
)

# 3. Knowledge Bases
kb_retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="KB123456",
    retrieval_config={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "overrideSearchType": "HYBRID"
        }
    }
)

# 4. Bedrock Agents
from langchain_aws.agents import BedrockAgentsRunnable

bedrock_agent = BedrockAgentsRunnable(
    agent_id="AGENT123",
    agent_alias_id="ALIAS456"
)

response = bedrock_agent.invoke({
    "input": "What's the weather?",
    "session_id": "session123"
})
RAG with Bedrock:
from langchain_aws import ChatBedrock, BedrockEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1. Load and split documents (documents: list of Document objects loaded earlier)
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
docs = text_splitter.split_documents(documents)

# 2. Create embeddings with Bedrock
embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v2:0"
)

# 3. Create vector store
vectorstore = FAISS.from_documents(docs, embeddings)

# 4. Create retrieval chain
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet")
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 5})
)

# 5. Query
answer = qa_chain.invoke({"query": "What is the return policy?"})
Use Cases
Best for:
- RAG applications
- General-purpose LLM apps
- When you need extensive integrations
- Production applications
- Document processing pipelines
- Chatbots and conversational AI
Example: Customer Support RAG System
from langchain_aws import ChatBedrock, AmazonKnowledgeBasesRetriever
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Bedrock components
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet")
retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="KB123456"
)

# Memory for conversation history
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer"
)

# Conversational RAG chain
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    return_source_documents=True
)

# Multi-turn conversation
response1 = qa_chain({"question": "What's your return policy?"})
response2 = qa_chain({"question": "How long does it take?"})  # Remembers context
AutoGen
Developer: Microsoft
License: MIT
GitHub: microsoft/autogen
Philosophy
AutoGen enables multi-agent conversations where agents communicate with each other to solve problems collaboratively. It emphasizes agent-to-agent interaction and supports both autonomous and human-in-the-loop workflows.
Core Characteristics
Strengths:
- ✅ Powerful multi-agent conversations
- ✅ Human-in-the-loop support
- ✅ Code execution capabilities
- ✅ Flexible conversation patterns
- ✅ Good for research and experimentation
- ✅ Microsoft ecosystem integration

Limitations:
- ❌ Complex setup for production
- ❌ Less AWS-native than other frameworks
- ❌ Steeper learning curve
- ❌ Can be unpredictable with many agents
Architecture
# AutoGen Architecture - Conversational Agents
import autogen

# 1. Configure LLM
# (Bedrock config keys vary by AutoGen version; see the native example below)
config_list = [{
    "model": "anthropic.claude-3-sonnet",
    "api_type": "bedrock",
    "region": "us-east-1"
}]

# 2. Create agents
user_proxy = autogen.UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding"}
)

assistant = autogen.AssistantAgent(
    name="Assistant",
    llm_config={"config_list": config_list}
)

critic = autogen.AssistantAgent(
    name="Critic",
    system_message="Review and critique solutions",
    llm_config={"config_list": config_list}
)

# 3. Initiate conversation
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci"
)

# Agents converse until the task is complete
AWS Bedrock Integration
Using Bedrock with AutoGen:
import json
import boto3
import autogen
from typing import Dict, List

# Custom Bedrock client for AutoGen
class BedrockClient:
    def __init__(self, model_id: str, region: str):
        self.client = boto3.client('bedrock-runtime', region_name=region)
        self.model_id = model_id

    def create(self, messages: List[Dict], **kwargs):
        # Convert to Bedrock format and invoke
        response = self.client.invoke_model(
            modelId=self.model_id,
            body=json.dumps({
                "messages": messages,
                "max_tokens": kwargs.get("max_tokens", 4096),
                "temperature": kwargs.get("temperature", 0.7)
            })
        )
        return response  # parse response["body"] into AutoGen's expected shape

# Configure AutoGen with Bedrock
config_list = [{
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "api_type": "bedrock",
    "region_name": "us-east-1",
    "bedrock_client": BedrockClient(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        region="us-east-1"
    )
}]

# Create agents with Bedrock
assistant = autogen.AssistantAgent(
    name="Assistant",
    llm_config={"config_list": config_list}
)
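Recent pyautogen releases ship a native Bedrock client, which usually makes the custom wrapper above unnecessary; a minimal sketch, assuming the api_type/aws_region config keys documented for current AutoGen versions (verify against your installed release):

import autogen

# Native Bedrock config - no custom client required
config_list = [{
    "api_type": "bedrock",
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "aws_region": "us-east-1"
}]

assistant = autogen.AssistantAgent(
    name="Assistant",
    llm_config={"config_list": config_list}
)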
Use Cases
Best for:
- Research and experimentation
- Code generation and execution
- Multi-agent debates and discussions
- Complex problem-solving requiring collaboration
- When agent-to-agent communication is key
Example: Code Review System
import autogen

config_list = [{
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "api_type": "bedrock",
    "region_name": "us-east-1"
}]

# Developer agent
developer = autogen.AssistantAgent(
    name="Developer",
    system_message="You write Python code",
    llm_config={"config_list": config_list}
)

# Reviewer agent
reviewer = autogen.AssistantAgent(
    name="Reviewer",
    system_message="You review code for bugs and improvements",
    llm_config={"config_list": config_list}
)

# Tester agent
tester = autogen.UserProxyAgent(
    name="Tester",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "tests"}
)

# Group chat
groupchat = autogen.GroupChat(
    agents=[developer, reviewer, tester],
    messages=[],
    max_round=10
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list}
)

# Start collaborative code review
tester.initiate_chat(
    manager,
    message="Create a function to validate email addresses"
)
AWS Bedrock Integration
Bedrock-Specific Considerations
Model Selection by Framework:
| Framework | Bedrock Integration | Ease of Use | Recommended Models |
|---|---|---|---|
| Strands | Native, first-class | ⭐⭐⭐⭐⭐ | All Bedrock models |
| LangGraph | Excellent via LangChain | ⭐⭐⭐⭐ | Claude 3, Titan |
| CrewAI | Good via LangChain | ⭐⭐⭐⭐ | Claude 3 Sonnet |
| LangChain | Excellent, comprehensive | ⭐⭐⭐⭐⭐ | All Bedrock models |
| AutoGen | Custom integration needed | ⭐⭐⭐ | Claude 3 |
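Whichever framework you choose, you can list the model IDs actually enabled in your account and region through the Bedrock control-plane API before hard-coding one:

import boto3

# "bedrock" is the control-plane client; "bedrock-runtime" is for inference
bedrock = boto3.client("bedrock", region_name="us-east-1")
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])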
Common Bedrock Patterns
1. Using Multiple Bedrock Models:
# Different models for different tasks
from langchain_aws import ChatBedrock

# Expensive, powerful model for complex reasoning
sonnet = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"temperature": 0.7}
)

# Cheaper, faster model for simple tasks
haiku = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    model_kwargs={"temperature": 0.5}
)

# Use appropriately
complex_analysis = sonnet.invoke("Analyze this complex data...")
simple_summary = haiku.invoke("Summarize this text...")
2. Bedrock Knowledge Bases Integration:
# Works with all frameworks via LangChain
from langchain_aws import AmazonKnowledgeBasesRetriever

kb_retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="YOUR_KB_ID",
    retrieval_config={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "overrideSearchType": "HYBRID"  # Vector + keyword
        }
    }
)

# Use in any framework
docs = kb_retriever.get_relevant_documents("query")
3. Bedrock Guardrails:
from langchain_aws import ChatBedrock

# Apply guardrails to any Bedrock model
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    guardrails={
        "guardrailIdentifier": "your-guardrail-id",
        "guardrailVersion": "1",
        "trace": "enabled"
    }
)

# Guardrails automatically filter harmful content
response = llm.invoke("User input...")
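Guardrails can also be invoked standalone through the ApplyGuardrail runtime API, which is useful for screening text that never passes through a model call; a minimal sketch (the guardrail ID and version are placeholders):

import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
result = runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",
    guardrailVersion="1",
    source="INPUT",  # or "OUTPUT" to screen model responses
    content=[{"text": {"text": "User input to screen..."}}]
)
print(result["action"])  # e.g. "GUARDRAIL_INTERVENED" or "NONE"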
4. Cost Optimization with Bedrock:
# Strategy: Use cheaper models where possible
from langchain_aws import ChatBedrock

class CostOptimizedAgent:
    def __init__(self):
        # Haiku for simple tasks (cheapest)
        self.haiku = ChatBedrock(
            model_id="anthropic.claude-3-haiku-20240307-v1:0"
        )
        # Sonnet for complex tasks (balanced)
        self.sonnet = ChatBedrock(
            model_id="anthropic.claude-3-sonnet-20240229-v1:0"
        )
        # Opus for critical tasks (most expensive)
        self.opus = ChatBedrock(
            model_id="anthropic.claude-3-opus-20240229-v1:0"
        )

    def route_query(self, query: str, complexity: str):
        if complexity == "simple":
            return self.haiku.invoke(query)
        elif complexity == "medium":
            return self.sonnet.invoke(query)
        else:
            return self.opus.invoke(query)
Non-Bedrock AWS Services Integration
1. Lambda Deployment:
# Deploy any framework as a Lambda function
import json
from strands_agents import Agent

# Initialize the agent once at module load, so warm invocations reuse it
agent = Agent(
    model="bedrock/anthropic.claude-3-sonnet",
    tools=[...]
)

def lambda_handler(event, context):
    user_input = event['body']
    response = agent.run(user_input)
    return {
        'statusCode': 200,
        'body': json.dumps({'response': response})
    }
2. S3 for Document Storage:
import boto3
from langchain_community.document_loaders import S3FileLoader

# Load documents from S3
s3_client = boto3.client('s3')
loader = S3FileLoader(
    bucket="my-docs-bucket",
    key="documents/policy.pdf"
)
documents = loader.load()
# Process with any framework
3. DynamoDB for State Persistence:
import time
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('agent-state')

# Save agent state
# (DynamoDB rejects Python floats; convert them to Decimal or store JSON strings)
def save_state(session_id: str, state: dict):
    table.put_item(
        Item={
            'session_id': session_id,
            'state': state,
            'timestamp': int(time.time())
        }
    )

# Load agent state
def load_state(session_id: str):
    response = table.get_item(Key={'session_id': session_id})
    return response.get('Item', {}).get('state', {})
4. EventBridge for Agent Orchestration:
import json
import boto3

eventbridge = boto3.client('events')

# Trigger agent workflows via events
def trigger_agent_workflow(workflow_type: str, data: dict):
    eventbridge.put_events(
        Entries=[{
            'Source': 'agent.system',
            'DetailType': workflow_type,
            'Detail': json.dumps(data)
        }]
    )
Choosing the Right Framework
Decision Matrix
┌─────────────────────────────────────────────────────────────┐
│ FRAMEWORK SELECTION GUIDE │
├─────────────────────────────────────────────────────────────┤
│ │
│ START HERE │
│ │ │
│ ▼ │
│ Are you building on AWS? │
│ │ │
│ ├─ YES ──→ Need complex workflows? │
│ │ │ │
│ │ ├─ YES ──→ LangGraph │
│ │ │ │
│ │ └─ NO ──→ Strands (simplest) │
│ │ │
│ └─ NO ──→ Need multi-agent teams? │
│ │ │
│ ├─ YES ──→ CrewAI or AutoGen │
│ │ │
│ └─ NO ──→ LangChain (most flexible) │
│ │
└─────────────────────────────────────────────────────────────┘
Use Case Recommendations
Simple Chatbot / Assistant:
- ✅ Strands - Minimal code, fast setup
- ✅ LangChain - If you need RAG
- ❌ LangGraph - Overkill
- ❌ CrewAI - Overkill
- ❌ AutoGen - Overkill

RAG Application:
- ✅ LangChain - Best RAG support
- ✅ LangGraph - If complex retrieval logic
- ✅ Strands - Simple RAG
- ⚠️ CrewAI - Possible but not ideal
- ❌ AutoGen - Not designed for RAG

Multi-Agent Collaboration:
- ✅ CrewAI - Easiest for role-based teams
- ✅ LangGraph - Most control
- ✅ AutoGen - Best for agent conversations
- ⚠️ LangChain - Possible but verbose
- ❌ Strands - Not designed for this

Complex Workflow with Branching:
- ✅ LangGraph - Purpose-built for this
- ⚠️ LangChain - Possible but complex
- ❌ Strands - Too simple
- ❌ CrewAI - Limited branching
- ❌ AutoGen - Not ideal

AWS-Native Application:
- ✅ Strands - AWS-first design
- ✅ LangChain - Excellent Bedrock support
- ✅ LangGraph - Great Bedrock support
- ⚠️ CrewAI - Good via LangChain
- ⚠️ AutoGen - Requires custom integration

Research / Experimentation:
- ✅ AutoGen - Great for exploration
- ✅ LangGraph - Good debugging tools
- ✅ Strands - Quick prototyping
- ⚠️ LangChain - Can be heavy
- ⚠️ CrewAI - More opinionated
Complexity vs Control Trade-off
High Control
│
│ LangGraph ●
│
│ AutoGen ●
│
│ LangChain ●
│
│ CrewAI ●
│
│ Strands ●
│
Low Control
└────────────────────────────────────→
Low Complexity High Complexity
Team Skill Level Considerations
Beginner Team:
1. Strands - Simplest, minimal concepts
2. CrewAI - Intuitive role-based model
3. LangChain - Good docs but more to learn

Intermediate Team:
1. LangChain - Flexible, well-documented
2. CrewAI - Easy multi-agent
3. LangGraph - If workflows are complex

Advanced Team:
1. LangGraph - Maximum control
2. AutoGen - Complex multi-agent
3. Custom - Build your own
Implementation Examples
Example 1: Customer Support Agent (Strands)
Scenario: Simple customer support bot with knowledge base access
from strands_agents import Agent, Tool
import boto3

# Bedrock KB tool
def search_knowledge_base(query: str) -> str:
    """Search company knowledge base"""
    client = boto3.client('bedrock-agent-runtime')
    response = client.retrieve(
        knowledgeBaseId='KB123456',
        retrievalQuery={'text': query}
    )
    # Flatten retrieved chunks into a single string for the model
    return "\n".join(
        r['content']['text'] for r in response['retrievalResults']
    )

# Create agent
agent = Agent(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    system_prompt="""You are a helpful customer support agent.
    Use the knowledge base to answer questions accurately.
    If you cannot find an answer, politely say so.""",
    tools=[
        Tool(name="search_kb", function=search_knowledge_base)
    ]
)

# Use
response = agent.run("What's your return policy for electronics?")
print(response)
Why Strands?
- Simple use case
- Minimal code required
- AWS-native
- No complex workflows needed
Example 2: Document Analysis Pipeline (LangGraph)
Scenario: Multi-step document analysis with quality checks and retries
from langgraph.graph import StateGraph, END
from langchain_aws import ChatBedrock
from typing import TypedDict

class AnalysisState(TypedDict):
    document: str
    summary: str
    key_points: list
    quality_score: float
    retry_count: int

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

# Nodes
def summarize(state: AnalysisState):
    summary = llm.invoke(f"Summarize: {state['document']}")
    return {"summary": summary.content}

def extract_points(state: AnalysisState):
    points = llm.invoke(f"Extract key points from: {state['summary']}")
    return {"key_points": points.content.split('\n')}

def quality_check(state: AnalysisState):
    # Score quality by key-point coverage
    score = len(state['key_points']) / 10.0
    return {"quality_score": score}

def route_decision(state: AnalysisState):
    if state['quality_score'] >= 0.7:
        return "finalize"
    elif state['retry_count'] < 3:
        return "retry"
    return "finalize"

def retry_node(state: AnalysisState):
    return {"retry_count": state['retry_count'] + 1}

# Build graph
workflow = StateGraph(AnalysisState)
workflow.add_node("summarize", summarize)
workflow.add_node("extract", extract_points)
workflow.add_node("check", quality_check)
workflow.add_node("retry", retry_node)
workflow.add_node("finalize", lambda s: s)

workflow.set_entry_point("summarize")
workflow.add_edge("summarize", "extract")
workflow.add_edge("extract", "check")
workflow.add_conditional_edges(
    "check",
    route_decision,
    # route through the retry node so retry_count actually increments
    {"retry": "retry", "finalize": "finalize"}
)
workflow.add_edge("retry", "summarize")
workflow.add_edge("finalize", END)

app = workflow.compile()

# Run
result = app.invoke({
    "document": long_document,  # your source text
    "retry_count": 0
})
Why LangGraph?
- Complex workflow with conditional logic
- Retry mechanism needed
- State persistence across steps
- Quality checks and routing
Example 3: Content Creation Team (CrewAI)
Scenario: Multi-agent team for blog post creation
from crewai import Agent, Task, Crew
from langchain_aws import ChatBedrock

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

# Define agents
# (web_search_tool, kb_search_tool, grammar_tool are your own tool objects)
researcher = Agent(
    role="Research Specialist",
    goal="Research topics thoroughly and gather accurate information",
    backstory="Expert researcher with 10 years of experience in tech",
    llm=llm,
    tools=[web_search_tool, kb_search_tool],
    verbose=True
)

writer = Agent(
    role="Content Writer",
    goal="Write engaging, SEO-optimized blog posts",
    backstory="Professional tech writer with strong SEO skills",
    llm=llm,
    tools=[grammar_tool],
    verbose=True
)

editor = Agent(
    role="Editor",
    goal="Review and improve content quality",
    backstory="Senior editor with an eye for detail",
    llm=llm,
    verbose=True
)

# Define tasks
research_task = Task(
    description="Research the topic: {topic}. Find latest trends and statistics.",
    agent=researcher,
    expected_output="Comprehensive research report with sources"
)

writing_task = Task(
    description="Write a 1000-word blog post based on the research",
    agent=writer,
    expected_output="Complete blog post with SEO optimization"
)

editing_task = Task(
    description="Review and edit the blog post for quality and clarity",
    agent=editor,
    expected_output="Polished, publication-ready blog post"
)

# Create crew
content_crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research_task, writing_task, editing_task],
    process="sequential",
    verbose=True
)

# Execute
result = content_crew.kickoff(inputs={"topic": "AI Agents in 2025"})
print(result)
Why CrewAI?
- Natural role-based structure
- Sequential task delegation
- Each agent has a specialized role
- Team collaboration metaphor fits
Example 4: RAG Chatbot (LangChain)
Scenario: Conversational chatbot with memory and RAG
from langchain_aws import ChatBedrock, AmazonKnowledgeBasesRetriever
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Initialize components
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"temperature": 0.7}
)

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="KB123456",
    retrieval_config={
        "vectorSearchConfiguration": {
            "numberOfResults": 5
        }
    }
)

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer"
)

# Create conversational RAG chain
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
    verbose=True
)

# Multi-turn conversation
print("Chatbot: Hello! How can I help you today?")
while True:
    user_input = input("You: ")
    if user_input.lower() in ['exit', 'quit']:
        break
    response = qa_chain({"question": user_input})
    print(f"Chatbot: {response['answer']}")
    # Show sources
    if response.get('source_documents'):
        print("\nSources:")
        for doc in response['source_documents'][:2]:
            print(f"- {doc.metadata.get('source', 'Unknown')}")
Why LangChain?
- Excellent RAG support
- Built-in memory management
- Bedrock KB integration
- Mature conversational chains
Example 5: Code Review System (AutoGen)
Scenario: Multi-agent code review with execution
import autogen

# Configure Bedrock
config_list = [{
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "api_type": "bedrock",
    "region_name": "us-east-1"
}]

# Create agents
developer = autogen.AssistantAgent(
    name="Developer",
    system_message="You are a Python developer. Write clean, efficient code.",
    llm_config={"config_list": config_list}
)

reviewer = autogen.AssistantAgent(
    name="CodeReviewer",
    system_message="""You are a code reviewer. Check for:
    - Bugs and errors
    - Performance issues
    - Security vulnerabilities
    - Best practices""",
    llm_config={"config_list": config_list}
)

executor = autogen.UserProxyAgent(
    name="Executor",
    human_input_mode="NEVER",
    code_execution_config={
        "work_dir": "code_review",
        "use_docker": False
    }
)

# Group chat for collaboration
groupchat = autogen.GroupChat(
    agents=[developer, reviewer, executor],
    messages=[],
    max_round=12
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list}
)

# Start code review process
executor.initiate_chat(
    manager,
    message="""Create a Python function to validate email addresses.
    Include error handling and unit tests."""
)
Why AutoGen?
- Multi-agent conversation needed
- Code execution capability
- Collaborative problem-solving
- Iterative review process
Summary
Quick Reference
| Need | Framework | Reason |
|---|---|---|
| Simplest setup | Strands | Minimal code, AWS-native |
| Complex workflows | LangGraph | State machines, branching |
| Multi-agent teams | CrewAI | Role-based, intuitive |
| RAG application | LangChain | Best RAG support |
| Agent conversations | AutoGen | Multi-agent dialogue |
| AWS Bedrock-first | Strands or LangChain | Native integration |
| Maximum control | LangGraph | Explicit workflow control |
| Quick prototype | Strands | Fastest to implement |
Installation Commands
# Strands
pip install strands-agents boto3
# LangGraph
pip install langgraph langchain-aws
# CrewAI
pip install crewai crewai-tools langchain-aws
# LangChain
pip install langchain langchain-aws boto3
# AutoGen
pip install pyautogen boto3
Key Takeaways
- Start Simple: Begin with Strands or LangChain for most use cases
- AWS Integration: Strands and LangChain have the best Bedrock support
- Complexity Matters: Use LangGraph only when you need complex workflows
- Team Structure: CrewAI excels when your problem maps to roles
- Experimentation: AutoGen is great for research and exploration
- Production Ready: LangChain and Strands are most production-ready
- Community: LangChain has the largest community and resources
- Cost Optimization: All frameworks support multiple Bedrock models for cost control
Resources
Official Documentation:
- Strands Agents
- LangGraph
- CrewAI
- LangChain
- AutoGen

AWS Resources:
- Amazon Bedrock Documentation
- Bedrock Agents
- Bedrock Knowledge Bases

Community:
- LangChain Discord
- CrewAI Discord
- AWS re:Post